Efficient coding for multi-source networks using Gács-Körner common information
Authors
Abstract
Consider a multi-source network coding problem with correlated sources. While the fundamental limits are known, achieving them in general involves a computational burden due to the complex decoding process. Efficient solutions, on the other hand, are by and large based on the separation of source and network coding, thus imposing strict topological constraints on the networks that can be solved. In this work, we introduce a novel notion of separation of source and network coding using Gács-Körner Common Information (CI). Unlike existing notions of separation, the sufficient condition for this separation to hold depends on the source structure rather than the network topology. Using the suggested separation scheme, we tackle three important multi-source problems. The first is multi-source multicast: we construct efficient, zero-error source codes and, via properties of the CI, completely characterize the resulting rate region. The second is broadcast with side information: we establish a duality between this problem and the classical problem of degraded-message-set broadcast, and give two code constructions and their associated regions. Finally, we consider the Ahlswede-Körner problem in a network, and give an efficient solution which is tight under the CI constraints.
Keywords: Network Coding, Common Information, Distributed Source Coding
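As background for the abstract above: the Gács-Körner common information of a finite joint distribution equals the entropy of the maximal common random variable, which can be read off from the connected components of the bipartite graph linking x and y whenever p(x, y) > 0. The following is a minimal illustrative sketch of that characterization (the function name and the toy distribution are my own, not from the paper):

```python
import math
from collections import defaultdict

def gacs_korner_ci(pxy):
    """Gács-Körner common information of a finite joint pmf.

    pxy: dict mapping (x, y) -> probability (positive entries only).
    The maximal common variable is the connected-component label of the
    bipartite graph with an edge (x, y) whenever p(x, y) > 0; the CI is
    the entropy of that label.
    """
    # Union-find over symbols; tag x- and y-symbols so they stay distinct.
    parent = {}

    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for (x, y), p in pxy.items():
        if p > 0:
            union(('x', x), ('y', y))

    # Total probability mass of each connected component.
    comp = defaultdict(float)
    for (x, y), p in pxy.items():
        comp[find(('x', x))] += p

    # Entropy (in bits) of the component label.
    return -sum(p * math.log2(p) for p in comp.values() if p > 0)

# Two disjoint "blocks" of mass 1/2 each: the common part is one bit.
pxy = {(0, 0): 0.25, (0, 1): 0.25, (1, 2): 0.25, (1, 3): 0.25}
print(gacs_korner_ci(pxy))  # 1.0
```

Note that for a fully connected support (e.g. two independent uniform bits) there is a single component, so the CI is zero even though the mutual information may be positive; this gap is exactly what the "dual" and "approximate" notions in the related works below address.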
Related works
A new dual to the Gács-Körner common information defined via the Gray-Wyner system
We consider jointly distributed random variables X and Y. After describing the Gács-Körner common information between the random variables from the viewpoint of the capacity region of the Gray-Wyner system, we propose a new notion of common information between the random variables that is dual to the Gács-Körner common information, from this viewpoint, in a well-defined sense. We characterize t...
Maximum Entropy Functions: Approximate Gacs-Korner for Distributed Compression
Consider two correlated sources X and Y generated from a joint distribution pX,Y . Their Gács-Körner Common Information, a measure of common information that exploits the combinatorial structure of the distribution pX,Y , leads to a source decomposition that exhibits the latent common parts in X and Y . Using this source decomposition we construct an efficient distributed compression scheme, wh...
Multiterminal Secret Key Agreement at Asymptotically Zero Discussion Rate
In the multiterminal secret key agreement problem, a set of users want to discuss with each other until they share a common secret key independent of their discussion. We want to characterize the maximum secret key rate, called the secrecy capacity, asymptotically when the total discussion rate goes to zero. In the case of only two users, the capacity is equal to the Gács–Körner common informat...
Common information revisited
One of the main notions of information theory is the notion of mutual information in two messages (two random variables in Shannon information theory or two binary strings in algorithmic information theory). The mutual information in x and y measures how much the transmission of x can be simplified if both the sender and the recipient know y in advance. Gács and Körner gave an example where mut...
Hypothesis testing via a comparator and hypercontractivity
This paper investigates the best achievable performance by a hypothesis test satisfying a structural constraint: two functions are computed at two different terminals and the detector consists of a simple comparator checking if the functions agree. Such tests arise as part of study of fundamental limits of channel coding, and hypothesis testing with communication (rate) constraints. A simple ex...